So I promised on Monday that I would revisit the objective function we have to optimize for MR image intensity inhomogeneity correction. The idea of the fuzzy c-means clustering is basically that we do two things in parallel: on the one hand, inhomogeneity correction, and on the other hand, the classification of the pixels in terms of their assignment to a particular tissue class. These are the two aspects we do in parallel, and that's the core idea of the fuzzy c-means clustering. That means we want to eliminate the low-frequency change in the intensity surface, and we want to reduce the number of different intensity values in the resulting image to the limited number of tissue classes that are present in the image. A side effect is that we end up with images that require a low number of bits for quantization. The images usually look very different from the images you get out of the scanner, and the practical use of these images in diagnostic procedures is rather limited. But anyway, it's a very smart algorithm and a very important technique that allows us to eliminate the image inhomogeneities properly.
Last Monday, I explained to you the terms data similarity and prior knowledge. Data similarity means we have something in our objective function that measures the similarity of the measured data to some model that we want to construct. So here, for instance, this term expresses how close the measurements, that is, the intensity values we have measured, are to the intensity value c_i that represents the i-th tissue class we are considering. And here we have a weighting factor that basically assigns a probability to the correspondence of x_k to the class center c_i.
That's the idea. So once again, what you have to keep in mind here: you have features in 2D, which is easier to show than in 1D, with axes x_1 and x_2, and you have centers, say c_1 and c_2. And here we have the Euclidean distance of the vector we are currently considering to the center c_1, and respectively the distance to the center c_2. If we have large distances, that means the data points are very dissimilar; if we have small distances, we say they are very similar. And here we have a weighting factor that gives us the probability that this vector here is assigned to this class and that vector there is assigned to that class.
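To make this concrete, here is a sketch of the data similarity term in the usual fuzzy c-means notation; the symbols u_ik for the membership weight and p > 1 for the fuzzifier are my assumptions, since the formula itself is only pointed at on the slide:

\[
J_{\text{data}}(U, C) = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{p}\, \lVert x_k - c_i \rVert^{2},
\qquad \text{with } \sum_{i=1}^{c} u_{ik} = 1 \text{ for every } k .
\]

A large distance from x_k to c_i only hurts the objective when the membership u_ik is high, so minimizing this term pushes each pixel's memberships toward the nearby centers.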
And then I explained to you prior knowledge. The prior, or the regularizer that incorporates prior knowledge, only considers parameters that belong to the model. So they are independent of the x_k, independent of the observations. And to remember this ten years from now, keep in mind this simple example: if you pick up a newspaper, the probability that my name shows up in it is way smaller than for the name of our chancellor, because our chancellor is in the press every day and we know that, and I'm not in the press at all. That means that without looking into the newspaper, you have some prior knowledge about the probability of something happening.
Okay, and now to the point where I screwed the whole lecture up on Monday. The second term here is not part of the prior; it is part of the data similarity, of the similarity measure, because the observation still appears in it. What we want to measure with this second term is, and this part was explained correctly, that pixels in a local neighborhood should belong to the same tissue class. And we do this on the basis of a data similarity term. First of all, we have the same structure as above: we look at a feature indexed by k. So what we think of here, in terms of an image, is: we have our image grid, and we take this point here, x_k, the k-th position in the image. Then we look at the neighborhood of the k-th point; if we do low-level image processing, this is usually some three-by-three or five-by-five neighborhood. Then we sum over all the intensities in the neighborhood and measure their distance to the class center c_i, weighted by the probability that the k-th feature, the k-th observation, belongs to the i-th cluster, so that the k-th point belongs to the i-th cluster. And then we sum up over all the points, so we walk through the whole image and consider this for all the points, and then we consider this for all the clusters.
And if we have to minimize this term, it is clear that in a local neighborhood we want to have small values here. If the distances here are high, this membership weight will turn out to be smaller; that is what the optimization procedure will enforce here. And if the distances are small, the membership will become high, because then the neighborhood fits that tissue class well.
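Putting both parts together, the data similarity has roughly the shape of the bias-corrected fuzzy c-means objective of Ahmed et al.; the following is a sketch in assumed notation (memberships u_ik, fuzzifier p, neighborhood N_k around pixel k, and a coupling weight alpha), not necessarily the exact formula on the slide:

\[
J = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{p}\, \lVert x_k - c_i \rVert^{2}
  + \alpha \sum_{i=1}^{c} \sum_{k=1}^{N} \frac{u_{ik}^{p}}{|N_k|} \sum_{r \in N_k} \lVert x_r - c_i \rVert^{2} .
\]

And here is a minimal Python sketch of the alternating optimization this suggests, under the same assumptions (3x3 neighborhoods via a uniform filter, the bias field left out for brevity; the function and parameter names are mine):

import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_cmeans_spatial(img, n_classes=3, p=2.0, alpha=1.0, n_iter=20):
    # img: 2-D array of intensities; returns memberships u (classes x H x W)
    # and the class centers c.
    x = img.astype(float)
    c = np.linspace(x.min(), x.max(), n_classes)  # spread initial centers
    for _ in range(n_iter):
        # d[i]: squared distance of every pixel to center c_i;
        # y[i]: the same distance averaged over the 3x3 neighborhood
        # (the filter includes the center pixel, a simplification)
        d = np.stack([(x - ci) ** 2 for ci in c])
        y = np.stack([uniform_filter((x - ci) ** 2, size=3) for ci in c])
        dist = d + alpha * y + 1e-12  # epsilon avoids division by zero
        # membership update: u_ik proportional to dist^(-1/(p-1)),
        # normalized so the memberships of each pixel sum to one
        u = dist ** (-1.0 / (p - 1.0))
        u /= u.sum(axis=0, keepdims=True)
        # center update: each center is the membership-weighted mean of
        # the pixel value plus alpha times its neighborhood mean
        w = u ** p
        xbar = uniform_filter(x, size=3)
        c = (w * (x + alpha * xbar)).sum(axis=(1, 2))
        c /= (1.0 + alpha) * w.sum(axis=(1, 2))
    return u, c

After convergence, labeling each pixel with the class of its largest membership and replacing its intensity by the corresponding center gives exactly the few-classes, few-bits image described at the beginning.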